
    Coping with confounds in multivoxel pattern analysis: What should we do about reaction time differences? A comment on Todd, Nystrom & Cohen 2013

    Multivoxel pattern analysis (MVPA) is a sensitive and increasingly popular method for examining differences between neural activation patterns that cannot be detected using classical mass-univariate analysis. Recently, Todd et al. (“Confounds in multivariate pattern analysis: Theory and rule representation case study”, 2013, NeuroImage 77: 157–165) highlighted a potential problem for these methods: high sensitivity to confounds at the level of individual participants due to the use of directionless summary statistics. Unlike traditional mass-univariate analyses, where confounding activation differences in opposite directions tend to approximately average out at group level, group-level MVPA results may be driven by any activation differences that can be discriminated in individual participants. In Todd et al.'s empirical data, factoring out differences in reaction time (RT) reduced a classifier's ability to distinguish patterns of activation pertaining to two task rules. This raises two significant questions for the field: to what extent have previous multivoxel discriminations in the literature been driven by RT differences, and by what methods should future studies take RT and other confounds into account? We build on the work of Todd et al. and compare two different approaches to removing the effect of RT in MVPA. We show that in our empirical data, in contrast to that of Todd et al., the effect of RT on rule decoding is negligible, and results were not affected by the specific details of RT modelling. We discuss the meaning of, and sensitivity to, confounds in traditional and multivoxel approaches to fMRI analysis. We observe that the increased sensitivity of MVPA comes at the price of reduced specificity, meaning that these methods in particular call for careful consideration of what differs between our conditions of interest. We conclude that the additional complexity of experimental design, analysis and interpretation needed for MVPA is still not a reason to favour a less sensitive approach.
    Funding: National Science Foundation (U.S.), Division of Information & Intelligent Systems (Collaborative Research in Computational Neuroscience 0904625); National Institutes of Health (U.S.), National Institute for Biomedical Imaging and Bioengineering/National Alliance for Medical Image Computing (U54-EB005149); National Institutes of Health (U.S.), National Institute for Biomedical Imaging and Bioengineering/Neuroimaging Analysis Center (P41-EB015902).
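    As a rough illustration of the confound-removal idea discussed in this abstract, the sketch below (not the authors' code; the simulated data, variable names, and the linear residualisation step are assumptions) regresses trial-wise RT out of voxel patterns before cross-validated rule decoding with scikit-learn.

```python
# A minimal sketch of regressing trial-wise reaction time (RT) out of voxel
# patterns before MVPA rule decoding. Simulated data stand in for real fMRI
# trial estimates; X, rt, y and runs are assumed to have been extracted already.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score, LeaveOneGroupOut

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200
X = rng.standard_normal((n_trials, n_voxels))   # trial-wise voxel patterns
rt = rng.uniform(0.4, 1.2, n_trials)            # trial-wise reaction times (s)
y = rng.integers(0, 2, n_trials)                # task-rule labels
runs = np.repeat(np.arange(6), n_trials // 6)   # scanning runs, used as CV folds

def residualise(X, rt):
    """Remove the linear effect of RT (plus an intercept) from every voxel."""
    design = np.column_stack([np.ones_like(rt), rt])
    beta, *_ = np.linalg.lstsq(design, X, rcond=None)
    return X - design @ beta

# For simplicity RT is residualised across all trials here; a stricter analysis
# would fit the RT regression within each cross-validation fold.
X_resid = residualise(X, rt)
scores = cross_val_score(LinearSVC(), X_resid, y, groups=runs, cv=LeaveOneGroupOut())
print(f"Rule decoding accuracy with RT regressed out: {scores.mean():.2f}")
```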

    When the Whole Is Less Than the Sum of Its Parts: Maximum Object Category Information and Behavioral Prediction in Multiscale Activation Patterns.

    Neural codes are reflected in complex neural activation patterns. Conventional electroencephalography (EEG) decoding analyses summarize activations by averaging/down-sampling signals within the analysis window, which obscures informative fine-grained patterns. While previous studies have proposed distinct statistical features capable of capturing variability-dependent neural codes, it has been suggested that the brain could use a combination of encoding protocols not reflected in any one mathematical feature alone. To test this, we combined 30 features using state-of-the-art supervised and unsupervised feature selection procedures (n = 17). Across three datasets, we compared decoding of visual object category between these 17 sets of combined features, and between combined and individual features. Object category could be robustly decoded using the combined features from all 17 algorithms. However, the combined feature sets, which were equalized in dimension to the individual features, were outperformed at most time points by the multiscale feature of Wavelet coefficients. Moreover, the Wavelet coefficients also explained behavioral performance more accurately than the combined features. These results suggest that a single but multiscale encoding protocol may capture the EEG neural codes better than any combination of protocols. Our findings put new constraints on models of neural information encoding in EEG.
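    To make the Wavelet-coefficient feature concrete, here is a minimal sketch (simulated epochs, not the paper's pipeline; the array shapes, wavelet family, and classifier are assumptions) that decodes object category from multiscale Wavelet coefficients using PyWavelets and scikit-learn.

```python
# A minimal sketch of category decoding from multiscale Wavelet coefficients.
# Placeholder random epochs stand in for real single-trial EEG data.
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_times = 200, 32, 256
epochs = rng.standard_normal((n_trials, n_channels, n_times))  # placeholder EEG epochs
y = rng.integers(0, 2, n_trials)                               # object-category labels

def wavelet_features(epoch, wavelet="db4", level=4):
    """Concatenate discrete-wavelet coefficients across channels and scales."""
    coeffs = [np.concatenate(pywt.wavedec(ch, wavelet, level=level)) for ch in epoch]
    return np.concatenate(coeffs)

X = np.array([wavelet_features(ep) for ep in epochs])
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5)
print(f"Category decoding accuracy (Wavelet features): {scores.mean():.2f}")
```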

    Late disruption of central visual field disrupts peripheral perception of form and color.

    Evidence from neuroimaging and brain stimulation studies suggests that visual information about objects in the periphery is fed back to foveal retinotopic cortex in a separate representation that is essential for peripheral perception. The characteristics of this phenomenon have important theoretical implications for the role fovea-specific feedback might play in perception. In this work, we employed a recently developed behavioral paradigm to explore whether late disruption of central visual space impairs perception of color. In the first experiment, participants performed a shape discrimination task on colored novel objects in the periphery while fixating centrally. Consistent with previous work, a visual distractor presented at fixation ~100 ms after presentation of the peripheral stimuli impaired sensitivity to differences in peripheral shapes more than a visual distractor presented at other stimulus onset asynchronies. In a second experiment, participants performed a color discrimination task on the same colored objects. In a third experiment, we further tested for this foveal distractor effect with stimuli restricted to a low-level feature by using homogeneous color patches. These latter two experiments produced a similar pattern of behavior: a central distractor presented at the critical stimulus onset asynchrony impaired sensitivity to peripheral color differences, but, importantly, the magnitude of the effect was stronger when peripheral objects contained complex shape information. These results show a behavioral effect consistent with disrupting feedback to the fovea, in line with the foveal feedback suggested by previous neuroimaging studies.
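    For readers unfamiliar with how sensitivity is quantified in this kind of paradigm, the sketch below computes d' for peripheral discrimination at each distractor stimulus onset asynchrony (SOA). The toy data, column names, same/different framing, and log-linear correction are all assumptions, not the study's analysis code.

```python
# A minimal sketch of per-SOA sensitivity (d') for a same/different judgement.
import numpy as np
import pandas as pd
from scipy.stats import norm

# Toy trial data: distractor SOA, whether the peripheral stimuli differed,
# and whether the participant responded "different".
rng = np.random.default_rng(2)
trials = pd.DataFrame({
    "soa_ms": rng.choice([0, 100, 250, 500], size=800),
    "different": rng.integers(0, 2, 800).astype(bool),
})
trials["resp_different"] = trials["different"] ^ (rng.random(800) < 0.25)

def dprime(hits, fas, n_signal, n_noise):
    """d' with a log-linear correction to avoid infinite z-scores."""
    hr = (hits + 0.5) / (n_signal + 1)
    far = (fas + 0.5) / (n_noise + 1)
    return norm.ppf(hr) - norm.ppf(far)

for soa, g in trials.groupby("soa_ms"):
    signal, noise = g[g["different"]], g[~g["different"]]
    d = dprime(signal["resp_different"].sum(), noise["resp_different"].sum(),
               len(signal), len(noise))
    print(f"SOA {soa:>3} ms: d' = {d:.2f}")
```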

    The effect of non-communicative eye movements on joint attention.

    Eye movements provide important signals for joint attention. However, the eye movements that indicate bids for joint attention often occur among non-communicative eye movements. This study investigated the influence of these non-communicative eye movements on subsequent joint attention responsivity. Participants played an interactive game with an avatar that required both players to search for a visual target on a screen. The player who discovered the target used their eyes to initiate joint attention. We compared participants’ saccadic reaction times (SRTs) to the avatar’s joint attention bids when they were preceded by non-communicative eye movements that predicted the location of the target (Predictive Search), by non-communicative eye movements that did not predict the location of the target (Random Search), and when there were no non-communicative eye movements prior to joint attention (No Search). We also included a control condition in which participants completed the same task but responded to a dynamic arrow stimulus instead of the avatar’s eye movements. For both eye and arrow conditions, participants had slower SRTs in Random Search trials than in No Search and Predictive Search trials; however, these effects were smaller for eyes than for arrows. These data suggest that joint attention responsivity for eyes is relatively robust to the presence and predictability of spatial information conveyed by non-communicative gaze. In contrast, random sequences of dynamic arrows had a much more disruptive impact on subsequent responsivity than predictive arrow sequences. This may reflect specialised social mechanisms and expertise for selectively responding to communicative eye gaze cues during dynamic interactions, which is likely facilitated by the integration of ostensive eye contact cues.
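    As an illustration of the kind of comparison described above, this sketch summarises SRTs by cue type and search condition and tests the Random- vs. No-Search slowing with paired t-tests. The numbers, effect sizes, and column names are fabricated for illustration and are not the study's data or analysis script.

```python
# A minimal sketch of a cue (eyes/arrow) x condition SRT summary with paired tests.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(3)
conds = ["No Search", "Predictive Search", "Random Search"]
rows = []
for pid in range(24):                       # hypothetical participants
    for cue in ["eyes", "arrow"]:
        base = rng.normal(320, 25)          # participant/cue baseline SRT (ms)
        for cond in conds:
            slow = 0.0
            if cond == "Random Search":     # larger slowing assumed for arrows
                slow = 40.0 if cue == "arrow" else 10.0
            rows.append({"pid": pid, "cue": cue, "condition": cond,
                         "srt_ms": base + slow + rng.normal(0, 15)})
df = pd.DataFrame(rows)

print(df.groupby(["cue", "condition"])["srt_ms"].mean().round(1))
for cue, g in df.groupby("cue"):
    wide = g.pivot(index="pid", columns="condition", values="srt_ms")
    t, p = stats.ttest_rel(wide["Random Search"], wide["No Search"])
    print(f"{cue}: Random vs No Search t = {t:.2f}, p = {p:.3f}")
```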

    Executive function and fluid intelligence after frontal lobe lesions

    Many tests of specific ‘executive functions’ show deficits after frontal lobe lesions. These deficits appear on a background of reduced fluid intelligence, best measured with tests of novel problem solving. For a range of specific executive tests, we ask how far frontal deficits can be explained by a general loss of fluid intelligence. For some widely used tests, e.g. Wisconsin Card Sorting, we find that fluid intelligence entirely explains frontal deficits: when patients and controls are matched on fluid intelligence, no further frontal deficit remains. For these tasks, too, deficits are unrelated to lesion location within the frontal lobe. A second group of tasks, including tests of both cognitive (e.g. Hotel, Proverbs) and social (Faux Pas) function, shows a different pattern: deficits are not fully explained by fluid intelligence, and the data suggest an association with lesions in the right anterior frontal cortex. Understanding of frontal lobe deficits may be clarified by separating reduced fluid intelligence, important in most or all tasks, from other more specific impairments and their associated regions of damage.
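    A hedged sketch of the covariate logic described above (simulated scores and assumed variable names, not the patient data or the paper's analysis): an ordinary least-squares model asks whether a group effect on an executive test survives once fluid intelligence is entered as a covariate.

```python
# A minimal ANCOVA-style sketch: group effect before and after adjusting for
# fluid intelligence, using simulated scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 80
group = rng.integers(0, 2, n)                       # 1 = frontal lesion, 0 = control
fluid_iq = rng.normal(100, 15, n) - 8 * group       # lesions assumed to lower fluid IQ
exec_score = 0.5 * fluid_iq + rng.normal(0, 5, n)   # executive score driven by fluid IQ
df = pd.DataFrame({"group": group, "fluid_iq": fluid_iq, "exec_score": exec_score})

raw = smf.ols("exec_score ~ group", data=df).fit()
adj = smf.ols("exec_score ~ group + fluid_iq", data=df).fit()
print(f"Unadjusted group coefficient: {raw.params['group']:.2f} (p = {raw.pvalues['group']:.3f})")
print(f"Adjusted group coefficient:   {adj.params['group']:.2f} (p = {adj.pvalues['group']:.3f})")
```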

    Unconstrained multivariate EEG decoding can help detect lexical-semantic processing in individual children

    Funder: Macquarie University
    Abstract: In conditions such as minimally-verbal autism, standard assessments of language comprehension are often unreliable. Given the known heterogeneity within the autistic population, it is crucial to design tests of semantic comprehension that are sensitive in individuals. Recent efforts to develop neural signals of language comprehension have focused on the N400, a robust marker of lexical-semantic violation at the group level. However, homogeneity of response in individual neurotypical children has not been established. Here, we presented 20 neurotypical children with congruent and incongruent visual animations and spoken sentences while measuring their neural response using electroencephalography (EEG). Despite robust group-level responses, we found high inter-individual variability in response to lexico-semantic anomalies. To overcome this, we analysed our data using temporally and spatially unconstrained multivariate pattern analyses (MVPA), supplemented by descriptive analyses to examine the timecourse, topography, and strength of the effect. Our results show that neurotypical children exhibit heterogeneous responses to lexical-semantic violation, implying that any application to heterogeneous disorders such as autism spectrum disorder will require individual-subject analyses that are robust to variation in topology and timecourse of neural responses.
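    The sketch below illustrates temporally unconstrained decoding for a single participant: congruent vs. incongruent is decoded at every time sample using all channels, with no pre-selected window or electrode site. The simulated epochs, classifier, and cross-validation scheme are assumptions, not the study's pipeline.

```python
# A minimal sketch of time-resolved, single-subject MVPA on EEG epochs.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_trials, n_channels, n_times = 160, 32, 200
epochs = rng.standard_normal((n_trials, n_channels, n_times))  # one child's epochs
y = rng.integers(0, 2, n_trials)                               # 1 = incongruent ending

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
accuracy = np.array([
    cross_val_score(clf, epochs[:, :, t], y, cv=5).mean()      # decode at each sample
    for t in range(n_times)
])
print(f"Peak single-subject decoding accuracy: {accuracy.max():.2f}")
```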